Biased gradient squared descent saddle point finding method.
Authors
Abstract
The harmonic approximation to transition state theory simplifies the problem of calculating a chemical reaction rate to identifying relevant low-energy saddle points in a chemical system. Here, we present a saddle point finding method which does not require knowledge of specific product states. In the method, the potential energy landscape is transformed into the square of the gradient magnitude, which converts all critical points of the original potential energy surface into global minima. A biasing term is added to the gradient squared landscape to stabilize the low-energy saddle points near a minimum of interest and to destabilize all other critical points. We demonstrate that this method is competitive with the dimer min-mode following method in terms of the number of force evaluations required to find a set of low-energy saddle points around a reactant minimum.
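To make the construction concrete, here is a minimal Python sketch on a two-dimensional double-well potential. The specific bias form (beta/2)(V - v_target)^2 and all parameter values are illustrative assumptions, not necessarily the paper's exact formulation, and a generic finite-difference minimizer stands in for the Hessian-vector-product machinery a real implementation would use.

```python
# Minimal sketch of a biased gradient squared descent on a 2-D model
# potential. The bias form and the names beta/v_target are assumptions
# for illustration, not necessarily the paper's exact formulation.
import numpy as np
from scipy.optimize import minimize

def V(p):
    """Double-well model potential: minima at (+/-1, 0), saddle at (0, 0)."""
    x, y = p
    return (x**2 - 1.0)**2 + y**2

def grad_V(p):
    x, y = p
    return np.array([4.0 * x * (x**2 - 1.0), 2.0 * y])

def H(p, beta=5.0, v_target=1.0):
    # Gradient-squared landscape: every critical point of V (minima,
    # saddles, maxima) becomes a global minimum of |grad V|^2. The bias
    # penalizes critical points whose energy lies far from v_target, so
    # with v_target near the barrier energy the saddle is the stable one.
    g = grad_V(p)
    return 0.5 * (g @ g) + 0.5 * beta * (V(p) - v_target)**2

# Start displaced from the reactant minimum at (-1, 0) and descend on H.
# scipy estimates dH/dp by finite differences here; a production code
# would use Hessian-vector products of V instead.
x0 = np.array([-0.5, 0.2])
res = minimize(H, x0, method="L-BFGS-B")
print("converged point:", res.x)                      # near the saddle (0, 0)
print("|grad V| there: ", np.linalg.norm(grad_V(res.x)))
```

One plausible way to collect several saddles with such a construction is to sweep v_target upward from the reactant energy, so that successively higher-lying saddles become the stabilized minima of the biased landscape; the paper describes its own scheme for choosing the bias.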
Similar articles
Accelerated gradient sliding for structured convex optimization
Our main goal in this paper is to show that one can skip gradient computations for gradient descent type methods applied to certain structured convex programming (CP) problems. To this end, we first present an accelerated gradient sliding (AGS) method for minimizing the summation of two smooth convex functions with different Lipschitz constants. We show that the AGS method can skip the gradient...
The Power of Normalization: Faster Evasion of Saddle Points
A commonly used heuristic in non-convex optimization is Normalized Gradient Descent (NGD), a variant of gradient descent in which only the direction of the gradient is taken into account and its magnitude ignored. We analyze this heuristic and show that with carefully chosen parameters and noise injection, this method can provably evade saddle points. We establish the convergence of NGD to a loc...
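As a rough illustration of the heuristic this snippet describes, here is a Python sketch of NGD with noise injection; the step size, noise scale, and test function are illustrative choices rather than the paper's analyzed parameters.

```python
# Sketch of normalized gradient descent (NGD) with noise injection.
# Step size, noise scale, and the test function are illustrative choices.
import numpy as np

def ngd(grad, x0, eta=0.02, noise=1e-3, steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad(x) + noise * rng.standard_normal(x.shape)  # injected noise
        norm = np.linalg.norm(g)
        if norm < 1e-12:
            break
        x -= eta * g / norm  # only the gradient's direction is used
    return x

# f(x, y) = x^2 + y^4 - y^2 has a saddle at the origin and minima at
# (0, +/-1/sqrt(2)). Started exactly at the saddle, plain gradient
# descent would stall; the noise lets NGD slide off toward a minimum.
grad_f = lambda p: np.array([2.0 * p[0], 4.0 * p[1]**3 - 2.0 * p[1]])
print(ngd(grad_f, [0.0, 0.0]))  # ~ (0, +/-0.707)
```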
Accelerated Gradient Descent Escapes Saddle Points Faster than Gradient Descent
Nesterov's accelerated gradient descent (AGD), an instance of the general family of "momentum methods", provably achieves a faster convergence rate than gradient descent (GD) in the convex setting. However, whether these methods are superior to GD in the nonconvex setting remains open. This paper studies a simple variant of AGD, and shows that it escapes saddle points and finds a second-order stat...
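For orientation, here is the classical Nesterov update that such momentum methods build on, sketched in Python; the escape guarantees in the paper rely on modifications (e.g. added perturbations) that this plain reference version omits.

```python
# Plain Nesterov accelerated gradient descent (AGD). The paper studies
# a perturbed variant; this reference sketch omits those additions.
import numpy as np

def agd(grad, x0, eta=0.1, momentum=0.9, steps=300):
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev.copy()
    for _ in range(steps):
        y = x + momentum * (x - x_prev)   # look-ahead (momentum) point
        x_prev, x = x, y - eta * grad(y)  # gradient step from the look-ahead
    return x

# Convex quadratic test: f(x) = 0.5 * x^T A x with minimizer at the origin.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
print(agd(lambda p: A @ p, [2.0, -1.5]))  # ~ (0, 0)
```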
Stochastic Parallel Block Coordinate Descent for Large-Scale Saddle Point Problems
We consider convex-concave saddle point problems with a separable structure and non-strongly convex functions. We propose an efficient stochastic block coordinate descent method using adaptive primal-dual updates, which enables flexible parallel optimization for large-scale problems. Our method shares the efficiency and flexibility of block coordinate descent methods with the simplicity of prim...
Saddle-Point Dynamics: Conditions for Asymptotic Stability of Saddle Points
This paper considers continuously differentiable functions of two vector variables that have (possibly a continuum of) min-max saddle points. We study the asymptotic convergence properties of the associated saddle-point dynamics (gradient descent in the first variable and gradient ascent in the second one). We identify a suite of complementary conditions under which the set of saddle points is a...
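A compact sketch of these dynamics is forward-Euler integration of gradient descent in the first variable and gradient ascent in the second; the step size and the convex-concave test function below are illustrative assumptions.

```python
# Sketch of saddle-point dynamics: gradient descent in x, gradient
# ascent in y, integrated with forward Euler. Step size and the test
# function F are illustrative choices.
import numpy as np

def saddle_point_dynamics(grad_x, grad_y, x0, y0, dt=0.05, steps=500):
    x, y = float(x0), float(y0)
    for _ in range(steps):
        # Simultaneous update: descend in x, ascend in y.
        x, y = x - dt * grad_x(x, y), y + dt * grad_y(x, y)
    return x, y

# F(x, y) = x^2 + x*y - y^2 is convex in x and concave in y, with its
# unique min-max saddle point at the origin.
grad_x = lambda x, y: 2.0 * x + y    # dF/dx
grad_y = lambda x, y: x - 2.0 * y    # dF/dy
print(saddle_point_dynamics(grad_x, grad_y, 1.0, -1.0))  # ~ (0, 0)
```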
Journal: The Journal of Chemical Physics
Volume: 140, Issue: 19
Pages: -
Published: 2014